
Speed up LargeBocSerializer with bulk cells reading #1533

Open
wants to merge 2 commits into base: testnet

Conversation

dungeon-master-666
Member

This PR speeds up vm::LargeBocSerializer by batch-fetching cells with RocksDB::MultiGet instead of issuing sequential RocksDB::Get operations. Key changes include:

  • Replaced depth-first search (DFS) traversal with breadth-first search (BFS)
  • Implemented level-by-level cell fetching using batched MultiGet, with a maximum batch size of 4 million cells
  • Each batch of cells is loaded into memory at once, which increases memory usage (with a batch size of 4 million, peak usage is 15–20 GB)
  • Since the in-memory implementation vm::BagOfCells still uses DFS, the cell order in the resulting BoC differs from the output of vm::LargeBocSerializer. The test in test-db.cpp that performs a byte-for-byte BoC comparison was therefore disabled.

This change speeds up state serialization by up to 3x.
